Knowledge graph embedding (KGE), which maps entities and relations in a knowledge graph into continuous vector spaces, has achieved great success in predicting missing links in knowledge graphs. However, knowledge graphs often contain incomplete triples that are difficult for KGE models to infer inductively. To address this challenge, we resort to analogical inference and propose AnKGE, a novel and general self-supervised framework that enhances KGE models with analogical inference capability. We design an analogical object retriever that retrieves appropriate analogical objects at the entity, relation, and triple levels. For each level, AnKGE trains an analogy function that takes the original element embedding from a well-trained KGE model as input and outputs the embedding of the analogical object. To combine the inductive inference capability of the original KGE model with the analogical inference capability added by AnKGE, we interpolate the analogy score with the base model score, introducing adaptive weights into the score function for prediction. Extensive experiments on the FB15k-237 and WN18RR datasets show that AnKGE achieves competitive results on the link prediction task and performs analogical inference well.
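The score interpolation at the heart of the prediction step can be sketched as below; the parameterization of the adaptive weight `w` (per relation, per triple, etc.) is an illustrative assumption, not the paper's exact formulation.

```python
import numpy as np

def interpolated_score(base_score, analogy_score, w):
    """Blend the frozen KGE model's score with the analogy score.

    `w` is an adaptive weight in [0, 1]; how it is learned is an
    assumption in this sketch.
    """
    return w * analogy_score + (1.0 - w) * base_score

# Toy usage: scores for three candidate tail entities.
base = np.array([7.2, 5.1, 6.8])      # from the well-trained KGE model
analogy = np.array([6.9, 6.4, 5.0])   # from the trained analogy functions
print(interpolated_score(base, analogy, w=0.3))
```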
As an important variant of entity alignment (EA), multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) that carry multiple modalities, such as images. However, current MMEA algorithms all adopt KG-level modality fusion strategies and ignore modality differences among individual entities, hurting robustness to potential noise in the modalities (e.g., unidentifiable images and relations). In this paper, we present MEAformer, a multi-modal entity alignment transformer approach for meta modality hybrid, which dynamically predicts the mutual correlation coefficients among modalities for instance-level feature fusion. A modal-aware hard entity replay strategy is also proposed to address entities with vague details. Extensive experimental results show that our model not only achieves SOTA performance in multiple training scenarios, including supervised, unsupervised, iterative, and low-resource settings, but also has a limited parameter count, fast runtime, and good interpretability. Our code will be available soon.
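Instance-level fusion, as opposed to fixed KG-level weights, can be illustrated with the following sketch; the dot-product scoring against an entity query and the exact fusion rule are assumptions, not the paper's transformer formulation.

```python
import numpy as np

def fuse_modalities(feats, query):
    """Weight each modality embedding by its correlation with this
    particular entity, then sum (a per-instance softmax fusion).

    `feats`: dict of modality name -> d-dimensional embedding.
    `query`: a d-dimensional entity query vector (an assumption here).
    """
    names = list(feats)
    scores = np.array([feats[m] @ query for m in names])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                  # softmax over modalities
    fused = sum(w * feats[m] for w, m in zip(weights, names))
    return fused, dict(zip(names, weights))

rng = np.random.default_rng(0)
feats = {m: rng.normal(size=8) for m in ("graph", "relation", "attribute", "image")}
fused, w = fuse_modalities(feats, query=rng.normal(size=8))
print(w)  # modality weights now differ from entity to entity
```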
One of the key challenges in deploying RL in real-world applications is adapting to variations of unknown environment contexts, such as changing terrains in robotic tasks and fluctuating bandwidth in congestion control. Existing works on adaptation to unknown environment contexts either assume the context is the same for the whole episode or assume the context variables are Markovian. However, in many real-world applications, the environment context usually stays stable for a stochastic period and then changes in an abrupt and unpredictable manner within an episode, resulting in a segment structure that existing works fail to address. To leverage the segment structure of piecewise-stable contexts in real-world applications, in this paper we propose a \textit{\textbf{Se}gmented \textbf{C}ontext \textbf{B}elief \textbf{A}ugmented \textbf{D}eep~(SeCBAD)} RL method. Our method jointly infers the belief distribution over the latent context together with the posterior over segment length, and performs more accurate belief context inference using the data observed within the current context segment. The inferred belief context can be leveraged to augment the state, leading to a policy that can adapt to abrupt variations in context. We demonstrate empirically that SeCBAD can infer context segment lengths accurately and outperforms existing methods on a toy grid-world environment and MuJoCo tasks with piecewise-stable contexts.
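The state-augmentation idea can be sketched as follows; `infer_belief` stands in for the joint posterior over latent context and segment length described in the abstract, and the toy Gaussian summary is purely illustrative.

```python
import numpy as np

def augment_state(obs, segment_obs, infer_belief):
    """Concatenate the raw observation with a belief over the latent
    context, inferred only from data inside the current segment.

    `infer_belief` is a placeholder for SeCBAD's posterior inference.
    """
    belief = infer_belief(segment_obs)        # e.g. mean/std of a Gaussian
    return np.concatenate([obs, belief])

# Toy belief: empirical mean and std of observations since the last
# detected context change.
toy_belief = lambda seg: np.array([np.mean(seg), np.std(seg)])
obs_history = [0.9, 1.1, 1.0, 1.05]           # current-segment data only
print(augment_state(np.array([0.3, -0.7]), obs_history, toy_belief))
```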
The goal of multimodal abstractive summarization (MAS) is to produce a concise summary given multimodal data (text and vision). Existing studies on MAS mainly focus on how to effectively use the extracted visual features and have achieved impressive success on high-resource English datasets. However, less attention has been paid to how relevant the visual features are to the summary, which may limit model performance, especially in low- and zero-resource scenarios. In this paper, we propose to improve summary quality through summary-oriented visual features. To this end, we devise two auxiliary tasks: a \emph{vision to summary task} and a \emph{masked image modeling task}. Together with the main summarization task, we optimize the MAS model via the training objectives of all these tasks. In this way, the MAS model is enhanced by capturing summary-oriented visual features, thereby yielding more accurate summaries. Experiments on 44 languages, covering mid-high-, low-, and zero-resource scenarios, verify the effectiveness and superiority of the proposed approach, which achieves state-of-the-art performance in all scenarios.
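A minimal sketch of the joint objective, assuming the main and auxiliary losses are simply combined with fixed scalar weights (the paper may balance them differently):

```python
def total_loss(loss_summary, loss_v2s, loss_mim, alpha=1.0, beta=1.0):
    """Main summarization loss plus the two auxiliary vision losses:
    vision-to-summary (v2s) and masked image modeling (mim).
    The fixed weights alpha and beta are an assumption of this sketch.
    """
    return loss_summary + alpha * loss_v2s + beta * loss_mim

print(total_loss(2.31, 0.84, 1.02, alpha=0.5, beta=0.5))
```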
This paper introduces the joint submission of Beijing Jiaotong University and WeChat AI to the WMT'22 chat translation task for English-German. Based on the Transformer, we apply several effective variants. In our experiments, we follow the pre-train-then-fine-tune paradigm. In the first, pre-training stage, we employ data filtering and synthetic data generation (i.e., back-translation, forward-translation, and knowledge distillation). In the second, fine-tuning stage, we investigate speaker-aware in-domain data generation, speaker adaptation, prompt-based context modeling, target-denoising fine-tuning, and a boosted self-COMET-based model ensemble. Our systems achieve COMET scores of 0.810 for English-German and 0.946 for German-English, the highest among all submissions.
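The simplest form of COMET-based ensembling is per-sentence candidate selection, sketched below; `comet_score(src, hyp)` is a placeholder for a reference-free quality estimator, and the paper's boosted self-COMET ensemble is more elaborate than this selection rule.

```python
def ensemble_by_comet(source, candidates, comet_score):
    """Pick, per sentence, the candidate translation that the quality
    estimator rates highest. `comet_score` is a hypothetical callable,
    not the actual COMET API.
    """
    return max(candidates, key=lambda hyp: comet_score(source, hyp))

# Toy scorer for demonstration only: prefer the longer hypothesis.
toy_score = lambda src, hyp: len(hyp.split())
cands = ["Das ist gut.", "Das ist sehr gut."]
print(ensemble_by_comet("That is very good.", cands, toy_score))
```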
Video super-resolution is one of the most popular tasks on mobile devices, widely used for automatic enhancement of low-bitrate and low-resolution video streams. While numerous solutions have been proposed for this problem, they are usually quite computationally demanding, demonstrating low FPS rates and poor power efficiency on mobile devices. In this Mobile AI challenge, we address this problem and task the participants with designing an end-to-end real-time video super-resolution solution for mobile NPUs optimized for low energy consumption. The participants were provided with the REDS training dataset containing video sequences for a 4X video upscaling task. The runtime and power efficiency of all models were evaluated on the powerful MediaTek Dimensity 9000 platform with a dedicated AI processing unit capable of accelerating floating-point and quantized neural networks. All proposed solutions are fully compatible with the above NPU, demonstrating up to 500 FPS and power consumption as low as 0.2 [Watt / 30 FPS]. A detailed description of all models developed in the challenge is provided in this paper.
Accurate and agile trajectory tracking for sub-gram micro aerial vehicles (MAVs) is challenging: the small scale of the robot induces large model uncertainties, demanding robust feedback controllers, while its fast dynamics and computational constraints prevent the deployment of computationally expensive strategies. In this work, we present an approach for agile and computationally efficient trajectory tracking on the MIT SoftFly, a sub-gram MAV (0.7 grams). Our strategy employs a cascaded control scheme, in which an adaptive attitude controller is combined with a neural network policy trained to imitate a robust tube model predictive controller (RTMPC) for trajectory tracking. The neural network policy is obtained using our recent work, which enables the policy to retain the robustness of RTMPC at a fraction of its computational cost. We evaluate our approach experimentally, achieving root-mean-square errors below 1.8 cm even in the more challenging maneuvers, reducing the maximum position error by 60% compared with our previous work, and demonstrating robustness to large external disturbances.
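The cascaded structure can be sketched as two nested loops; the function shapes, the toy policy, and the P-like inner loop below are all illustrative assumptions, not the SoftFly controllers.

```python
import numpy as np

def control_step(state, reference, policy, attitude_ctrl):
    """One step of a cascaded scheme: an outer learned policy (here,
    one imitating RTMPC) produces thrust and attitude setpoints, and
    an inner adaptive attitude controller turns them into actuator
    commands.
    """
    thrust, attitude_sp = policy(state, reference)     # outer loop
    return attitude_ctrl(state, attitude_sp, thrust)   # inner loop

# Toy stand-ins for the two loops (hypothetical, for shape only).
toy_policy = lambda s, r: (9.81, r[:3] - s[:3])        # hover thrust, crude setpoint
toy_attitude = lambda s, sp, t: np.concatenate([[t], 2.0 * sp])  # P-like inner loop
print(control_step(np.zeros(6), np.array([0.1, 0.0, 0.2, 0, 0, 0]),
                   toy_policy, toy_attitude))
```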
Inductive link prediction (ILP) is to predict links for unseen entities in emerging knowledge graphs (KGs), considering the evolving nature of KGs. A more challenging scenario is an emerging KG that consists solely of unseen entities, called a disconnected emerging KG (DEKG). Existing studies on DEKGs focus only on predicting enclosing links, i.e., links inside the emerging KG. Bridging links between the original KG and the DEKG, which carry the evolving information from the former to the latter, have not yet been investigated by previous work. To fill this gap, we propose a novel model named DEKG-ILP (Disconnected Emerging Knowledge Graph oriented Inductive Link Prediction), which consists of the following two components. (1) The module CLRM (Contrastive Learning-based Relation-specific feature Modeling) is developed to extract global relation-based semantic features that are shared between the original KG and DEKGs, using a novel sampling strategy. (2) The module GSM (GNN-based Subgraph Modeling) is proposed to extract the local subgraph topological information around each link in KGs. Extensive experiments on several benchmark datasets demonstrate that DEKG-ILP achieves obvious performance improvements over state-of-the-art methods for both enclosing and bridging link prediction. The source code is available online.
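Scoring a link from a global and a local signal can be sketched as below; the concatenate-then-project head is an assumption for illustration, not the paper's exact scoring function.

```python
import numpy as np

def link_score(rel_feat_head, rel_feat_tail, subgraph_feat, w):
    """Score a candidate link from two signals: entity features built
    from relation-specific (global) semantics, as in CLRM, and a GNN
    embedding of the local enclosing subgraph, as in GSM.
    """
    joint = np.concatenate([rel_feat_head, rel_feat_tail, subgraph_feat])
    return float(joint @ w)  # linear scoring head (an assumption)

rng = np.random.default_rng(1)
h, t, g = rng.normal(size=4), rng.normal(size=4), rng.normal(size=4)
print(link_score(h, t, g, w=rng.normal(size=12)))
```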
Multi-modal aspect-based sentiment classification (MABSC) is an emerging classification task that aims to classify the sentiment toward a given target, e.g., an entity mentioned in data with different modalities. For typical multi-modal data with text and images, previous methods do not make full use of the fine-grained semantics of the image, especially in combination with the semantics of the text, and do not fully consider modeling the relationship between fine-grained image information and the target, which leads to insufficient use of images and inadequate identification of fine-grained aspects and opinions. To address these limitations, we propose a new framework, SeqCSG, which includes a method for constructing sequential cross-modal semantic graphs and an encoder-decoder model. Specifically, we extract fine-grained information from the original image, the image caption, and the scene graph, and treat them, together with the tokens of the text, as elements of the cross-modal semantic graph. The cross-modal semantic graph is represented as a sequence with a multi-modal visible matrix indicating the relationships between elements. To make effective use of the cross-modal semantic graph, we propose an encoder-decoder method with a target prompt template. Experimental results show that our approach outperforms existing methods and achieves state-of-the-art performance on two standard MABSC datasets. Further analysis demonstrates the effectiveness of each component, and our model can implicitly learn the correlation between the target and fine-grained information of the image.
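A minimal sketch of building such a visible matrix over the serialized graph elements follows; how the edges are derived from the image, the caption, and the scene graph is outside this sketch and assumed given.

```python
import numpy as np

def visible_matrix(num_elements, edges):
    """Build a multi-modal visible matrix for a serialized semantic
    graph: element i may attend to element j only if they are related
    by an edge (or i == j).
    """
    vis = np.eye(num_elements, dtype=bool)
    for i, j in edges:
        vis[i, j] = vis[j, i] = True
    return vis

# 5 elements: 3 text tokens (0-2), 1 caption token (3), 1 scene-graph node (4).
print(visible_matrix(5, edges=[(0, 1), (1, 2), (2, 3), (3, 4)]).astype(int))
```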
Visual question answering (VQA) often requires an understanding of visual concepts and linguistic semantics, which depends on external knowledge. Most existing methods exploit pre-trained language models and/or unstructured text, but the knowledge in these resources is often incomplete and noisy. Some methods prefer knowledge graphs (KGs), which often have intensive structured knowledge, but the research is still quite preliminary. In this paper, we propose LaKo, a knowledge-driven VQA method via late knowledge-to-text injection. To effectively incorporate an external KG, we transfer triples into textual format and propose a late injection mechanism. Finally, we address VQA as a text generation task with an effective encoder-decoder paradigm. In the evaluation on the OKVQA dataset, our method achieves state-of-the-art results.
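The knowledge-to-text step can be sketched as a simple verbalization appended to the generative model's input; the template, field names, and triple retrieval are assumptions here, and the paper's late injection mechanism involves more than this concatenation.

```python
def inject_triples(question, caption, triples):
    """Verbalize KG triples as text and append them to the textual
    input fed to the encoder of a text-generation VQA model.
    The prompt template below is hypothetical.
    """
    facts = ". ".join(f"{h} {r} {t}" for h, r, t in triples)
    return f"question: {question} context: {caption} knowledge: {facts}"

triples = [("banana", "has color", "yellow"), ("banana", "is a", "fruit")]
print(inject_triples("What color is the fruit?", "a banana on a table", triples))
```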